4 research outputs found
A Broadband Metasurface-Based MIMO Antenna with High Gain and Isolation for 5G Millimeter-Wave Applications
This paper proposes a broadband metasurface-based MIMO antenna with high gain and isolation for 5G millimeter-wave applications. A single antenna is transformed into an array configuration to improve gain; as a result, each MIMO element consists of a 1x2 antenna array fed by a common feedline. A 9x6 array of elongated split-ring resonator (SRR) cells is stacked above the antenna to further improve gain and suppress coupling between the MIMO elements. Rogers 5880 substrates with thicknesses of 0.787 mm and 1.6 mm are used for the antenna and the metasurface, respectively. Antenna performance is assessed using S-parameters, MIMO metrics, and radiation patterns. The final design supports 5G applications by covering the mm-wave Ka-band and shows a noticeable increase in gain; once the metasurface is introduced, isolation between the elements also improves.
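Since the abstract mentions assessing MIMO characteristics from S-parameters, the sketch below illustrates one common such metric, the envelope correlation coefficient (ECC), using the standard lossless-antenna S-parameter approximation. This is a generic illustration, not the authors' code; the numeric S-parameter values are invented placeholders.

```python
import numpy as np

def envelope_correlation(s11, s21, s12, s22):
    """ECC of a 2-port MIMO antenna from S-parameters (lossless approximation)."""
    num = np.abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = (1 - np.abs(s11) ** 2 - np.abs(s21) ** 2) * \
          (1 - np.abs(s22) ** 2 - np.abs(s12) ** 2)
    return num / den

# Illustrative values only (not taken from the paper): a well-matched,
# well-isolated antenna pair should yield a very low ECC.
ecc = envelope_correlation(0.1 + 0.05j, 0.01 + 0.01j, 0.01 + 0.01j, 0.1 - 0.05j)
print(f"ECC = {ecc:.5f}")
```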
Deep Learning Models on CPUs: A Methodology for Efficient Training
GPUs have been favored for training deep learning models because of their highly parallel architecture, so most studies on training optimization focus on GPUs. There is often a trade-off, however, between cost and efficiency when choosing hardware for training. In particular, CPU servers would be beneficial if training on CPUs were more efficient, as they incur lower hardware update costs and make better use of existing infrastructure. This paper makes several contributions to research on training deep learning models using CPUs. First, it presents a method for optimizing the training of deep learning models on Intel CPUs and a toolkit called ProfileDNN, which we developed to improve performance profiling. Second, we describe a generic training optimization method that guides our workflow and explore several case studies in which we identified performance issues and then optimized the Intel Extension for PyTorch, resulting in an overall 2x training performance increase for the RetinaNet-ResNext50 model. Third, we show how the visualization capabilities of ProfileDNN enabled us to pinpoint bottlenecks and create a custom focal loss kernel that was two times faster than the official reference PyTorch implementation.
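As context for the Intel Extension for PyTorch mentioned in the abstract, the following minimal sketch shows how a CPU training step is typically wrapped with it. The ResNet-50 model, dummy batch, and bfloat16 choice are assumptions made for illustration; this is not the paper's RetinaNet-ResNext50 setup, its ProfileDNN toolkit, or its custom focal loss kernel.

```python
import torch
import torchvision
import intel_extension_for_pytorch as ipex

model = torchvision.models.resnet50()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model.train()

# ipex.optimize rewrites the model and optimizer with CPU-friendly memory
# layouts, operator fusion, and (optionally) bfloat16 weights.
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

# One illustrative training step on random data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 1000, (8,))

optimizer.zero_grad()
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```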
MLPerf Inference Benchmark
Machine-learning (ML) hardware and software system demand is burgeoning.
Driven by ML applications, the number of different ML inference systems has
exploded. Over 100 organizations are building ML inference chips, and the
systems that incorporate existing models span at least three orders of
magnitude in power consumption and five orders of magnitude in performance;
they range from embedded devices to data-center solutions. Fueling the hardware
are a dozen or more software frameworks and libraries. The myriad combinations
of ML hardware and ML software make assessing ML-system performance in an
architecture-neutral, representative, and reproducible manner challenging.
There is a clear need for industry-wide standard ML benchmarking and evaluation
criteria. MLPerf Inference answers that call. In this paper, we present our
benchmarking method for evaluating ML inference systems. Driven by more than 30
organizations as well as more than 200 ML engineers and practitioners, MLPerf
prescribes a set of rules and best practices to ensure comparability across
systems with wildly differing architectures. The first call for submissions
garnered more than 600 reproducible inference-performance measurements from 14
organizations, representing over 30 systems that showcase a wide range of
capabilities. The submissions attest to the benchmark's flexibility and
adaptability.
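To make the measurement style concrete, the sketch below mimics the spirit of MLPerf Inference's single-stream scenario: issue queries one at a time and report a tail-latency percentile. This is a simplified illustration, not the official LoadGen harness; the stand-in system under test, the 100-query run, and the 90th-percentile target are assumptions for the example.

```python
import time
import numpy as np

def system_under_test(sample):
    # Stand-in for a real inference call (model + runtime on the target system).
    time.sleep(0.001)
    return sample

def single_stream_run(samples, percentile=90.0):
    """Issue queries back-to-back and return the requested latency percentile."""
    latencies = []
    for s in samples:
        start = time.perf_counter()
        system_under_test(s)
        latencies.append(time.perf_counter() - start)
    return np.percentile(latencies, percentile)

lat = single_stream_run(range(100))
print(f"90th-percentile latency: {lat * 1e3:.2f} ms")
```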